
Update distributed_deployment.md #1281

Open · wants to merge 1 commit into master

Conversation

@generall (Member) commented Nov 9, 2024

No description provided.

netlify bot commented Nov 9, 2024

Deploy Preview for condescending-goldwasser-91acf0 ready!

Latest commit: 8eda527
Latest deploy log: https://app.netlify.com/sites/condescending-goldwasser-91acf0/deploys/672fccb26fec6d00086597ad
Deploy Preview: https://deploy-preview-1281--condescending-goldwasser-91acf0.netlify.app

By default, the cluster continues to accept updates as long as at least one replica of each shard is online. However, this behavior means that once an offline replica is restored, it will require additional synchronization with the rest of the cluster. In some cases, this synchronization can be resource-intensive and undesirable.

Setting the `write_consistency_factor` to match the `replication_factor` modifies the cluster's behavior so that unreplicated updates are rejected, preventing the need for extra synchronization.
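As a concrete illustration of the setting described above, here is a minimal sketch using the qdrant-client Python package. The collection name, vector size, and factor values are placeholders, not part of the original change:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

client = QdrantClient(url="http://localhost:6333")

# With write_consistency_factor equal to replication_factor, an update is
# acknowledged only if every replica applies it, so a replica that comes
# back online never needs extra synchronization for acknowledged writes.
client.create_collection(
    collection_name="example_collection",  # hypothetical name
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
    replication_factor=3,
    write_consistency_factor=3,
)
```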

Member commented:

Maybe it would be nice to be a bit more concrete about what happens:

Suggested change
If the update is applied to enough replicas, according to the `write_consistency_factor`, the update returns a successful status. Any replicas that failed to apply the update are temporarily disabled and automatically recovered to keep the data consistent. If the update could not be applied to enough replicas, it returns an error and may be partially applied; the user must submit the operation again to ensure data consistency.

Here I describe that the update returns a successful status if it was applied to enough replicas. That is not strictly true if there are consensus problems, but I don't think we have to describe that edge case here.
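To make the resubmit-on-error behavior tangible, here is a hedged sketch of a client retry loop, again assuming the qdrant-client Python package; the collection name, point contents, and retry policy are all illustrative:

```python
import time
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

client = QdrantClient(url="http://localhost:6333")

point = PointStruct(id=1, vector=[0.0] * 768, payload={"key": "value"})

# If the update cannot reach enough replicas, it returns an error and may be
# partially applied; resubmitting the same upsert (upserts are idempotent)
# restores consistency once enough replicas are reachable again.
for attempt in range(3):
    try:
        client.upsert(collection_name="example_collection", points=[point], wait=True)
        break
    except Exception:
        if attempt == 2:
            raise
        time.sleep(2 ** attempt)  # simple exponential backoff before retrying
```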
